Social cognitive optimization
Social cognitive optimization (SCO) is a population-based metaheuristic optimization algorithm developed in 2002. It is based on social cognitive theory; the key to its ergodicity is the combination of individual learning by a set of agents, each with its own memory, and their social learning through the knowledge points in a social sharing library. It has been used for solving continuous optimization, integer programming, and combinatorial optimization problems, and has been incorporated into the NLPSolver extension of Calc in Apache OpenOffice.
== Algorithm ==

Consider a global optimization problem on a function f(x), where x is a state in the problem space S. In SCO, each state is called a ''knowledge point'', and the function f is the ''goodness function''.
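For concreteness, the following sketch (an illustrative assumption, not taken from the source) represents knowledge points as NumPy vectors in a box-shaped problem space and uses the negated sphere function as the goodness function, so that larger goodness means a better point.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative problem setup (not from the source): knowledge points are
# vectors in a box-shaped problem space S = [LOWER, UPPER]^DIM, and the
# goodness function is the negated sphere function (optimum at x = 0).
LOWER, UPPER = -5.0, 5.0   # bounds of the problem space S
DIM = 10                   # dimensionality of a knowledge point x

def goodness(x: np.ndarray) -> float:
    """Goodness function f(x); larger values indicate better knowledge points."""
    return -float(np.sum(x ** 2))
</syntaxhighlight>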
In SCO, a population of N_c cognitive agents solves the problem in parallel, aided by a social sharing library. Each agent holds a private memory containing one knowledge point, and the social sharing library contains a set of N_L knowledge points. The algorithm runs for ''T'' iterative learning cycles. Running as a Markov chain process, the system behavior in the ''t''th cycle depends only on the system status in the (''t'' − 1)th cycle. The process flow is as follows:
* (Initialization): Initialize the private knowledge point x_i in the memory of each agent i, and all knowledge points in the social sharing library X, normally at random in the problem space S.
* (Learning cycle): At each cycle t (t = 1, \ldots, T), perform the following steps (a runnable sketch of the whole loop is given after this list):
** (Observational learning): For each agent i (i = 1, \ldots, N_c)
*** (Model selection): Find a high-quality ''model point'' x_M in X(t), normally realized using tournament selection, which returns the best knowledge point from \tau_B randomly selected points.
*** (Quality evaluation): Compare the private knowledge point x_i(t) and the model point x_M, and return the one with higher quality as the ''base point'' x_{Base}(t), and the other as the ''reference point'' x_{Refer}(t).
*** (Learning): Combine x_{Base}(t) and x_{Refer}(t) to generate a new knowledge point x_i(t+1). Normally x_i(t+1) should lie around x_{Base}(t), at a distance from x_{Base}(t) related to the distance between x_{Base}(t) and x_{Refer}(t); a boundary-handling mechanism should be incorporated here to ensure that x_i(t+1) \in S.
*** (Knowledge sharing): Share a knowledge point, normally x_i(t+1), with the social sharing library X.
*** (Individual update): Update the private knowledge of agent i, normally by replacing x_i(t) with x_i(t+1). Monte Carlo acceptance rules may also be considered.
** (Library maintenance): Use all knowledge points submitted by the agents to update X(t) into X(t+1). A simple way is one-by-one tournament selection: for each knowledge point submitted by an agent, replace the worst of \tau_W points randomly selected from X(t).
* (Termination): Return the best knowledge point found by the agents.
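The following sketch assembles the steps above into runnable Python. It is a minimal illustration under stated assumptions: tournament sizes \tau_B = \tau_W = 2, and a learning operator that samples each coordinate of x_i(t+1) uniformly in an interval centred on the base point whose half-width is the coordinate-wise distance to the reference point; the original papers may use different operators and parameter defaults, so this is a sketch rather than a reference implementation.
<syntaxhighlight lang="python">
import numpy as np

def sco(goodness, dim, lower, upper,
        n_agents=8, n_library=40, n_cycles=200,
        tau_b=2, tau_w=2, seed=0):
    """Minimal sketch of social cognitive optimization (SCO).

    The learning operator and tournament sizes are illustrative assumptions.
    Returns the best knowledge point found and its goodness.
    """
    rng = np.random.default_rng(seed)

    def random_point():
        return rng.uniform(lower, upper, size=dim)

    def tournament_best(points, k):
        idx = rng.choice(len(points), size=k, replace=False)
        return max(idx, key=lambda j: goodness(points[j]))

    def tournament_worst(points, k):
        idx = rng.choice(len(points), size=k, replace=False)
        return min(idx, key=lambda j: goodness(points[j]))

    # Initialization: private knowledge points and the social sharing library.
    agents = [random_point() for _ in range(n_agents)]
    library = [random_point() for _ in range(n_library)]
    best = max(agents + library, key=goodness)

    for _ in range(n_cycles):                        # learning cycles
        submitted = []
        for i in range(n_agents):                    # observational learning
            # Model selection: tournament over tau_b random library points.
            x_model = library[tournament_best(library, tau_b)]
            # Quality evaluation: choose the base and reference points.
            if goodness(agents[i]) >= goodness(x_model):
                x_base, x_refer = agents[i], x_model
            else:
                x_base, x_refer = x_model, agents[i]
            # Learning: sample around the base point; the per-coordinate
            # spread is the distance to the reference point (assumed rule).
            spread = np.abs(x_base - x_refer)
            x_new = rng.uniform(x_base - spread, x_base + spread)
            x_new = np.clip(x_new, lower, upper)     # boundary handling
            # Knowledge sharing and individual update.
            submitted.append(x_new)
            agents[i] = x_new
            if goodness(x_new) > goodness(best):
                best = x_new
        # Library maintenance: one-by-one tournament replacement.
        for x_new in submitted:
            library[tournament_worst(library, tau_w)] = x_new

    return best, goodness(best)
</syntaxhighlight>
With the goodness function and bounds from the earlier snippet, sco(goodness, DIM, LOWER, UPPER) drives the best knowledge point toward the origin; raising n_library or n_cycles trades computation for solution quality.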
SCO has three main parameters: the number of agents N_c, the size of the social sharing library N_L, and the number of learning cycles T. Including initialization, the total number of knowledge points generated is N_L + N_c(T+1), which depends only weakly on N_L when T is large.
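For example, with hypothetical values N_c = 2, N_L = 100 and T = 1000, a run generates 100 + 2 \times 1001 = 2102 knowledge points, so the evaluation budget is dominated by the N_c(T+1) term:
<syntaxhighlight lang="python">
# Worked budget check with hypothetical parameter values: the total number
# of generated knowledge points is dominated by N_c * (T + 1), not by N_L.
N_c, N_L, T = 2, 100, 1000
total_points = N_L + N_c * (T + 1)
print(total_points)   # 2102
</syntaxhighlight>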
Compared to traditional swarm algorithms such as particle swarm optimization, SCO can achieve high-quality solutions with a small N_c, even with N_c = 1. Nevertheless, small N_c and N_L may lead to premature convergence. Some variants have been proposed to guarantee global convergence.
